Full-text availability: subscription 2,187 articles; free 170; free within China 139.
By subject: electrical engineering: 35; general: 131; chemical industry: 52; metalworking: 23; machinery and instruments: 88; building science: 59; mining engineering: 18; energy and power: 59; light industry: 18; water conservancy engineering: 8; petroleum and natural gas: 16; weapons industry: 32; radio and electronics: 229; general industrial technology: 135; metallurgical industry: 70; atomic energy technology: 36; automation technology: 1,487.
By publication year: 2024: 5; 2023: 18; 2022: 29; 2021: 33; 2020: 37; 2019: 39; 2018: 25; 2017: 42; 2016: 51; 2015: 55; 2014: 90; 2013: 90; 2012: 99; 2011: 148; 2010: 113; 2009: 137; 2008: 171; 2007: 156; 2006: 156; 2005: 128; 2004: 84; 2003: 87; 2002: 77; 2001: 64; 2000: 58; 1999: 61; 1998: 55; 1997: 43; 1996: 50; 1995: 58; 1994: 41; 1993: 37; 1992: 30; 1991: 23; 1990: 12; 1989: 15; 1988: 15; 1987: 6; 1986: 7; 1985: 9; 1984: 4; 1982: 5; 1981: 3; 1980: 4; 1979: 5; 1978: 4; 1977: 2; 1976: 5; 1975: 2; 1959: 2.
2,496 results found (search time: 169 ms).
51.
A low-power rail-to-rail operational amplifier was designed in the SMIC 0.18 μm CMOS mixed-signal process, and its performance parameters were simulated with the Spectre simulator. The op-amp runs from a 3.3 V supply; both the input common-mode range and the output swing are rail-to-rail. The input-stage transconductance varies by only 15% over the entire input common-mode range, the DC open-loop gain is 99 dB, the unity-gain bandwidth is 3.2 MHz, the phase margin is 59° (with a 10 pF load capacitance), and the power consumption is 0.55 mW.
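As a rough plausibility check on these figures, the sketch below estimates the input-stage transconductance implied by the reported unity-gain bandwidth, assuming a simple single-stage/OTA model in which GBW ≈ gm/(2πC_L) with the 10 pF load as the dominant capacitance; the paper's actual topology and compensation scheme are not stated in the abstract, so the numbers are only indicative.

```python
import math

# Reported figures from the abstract
ugbw_hz = 3.2e6      # unity-gain bandwidth, 3.2 MHz
c_load = 10e-12      # load capacitance, 10 pF
power_w = 0.55e-3    # total power, 0.55 mW
vdd = 3.3            # supply voltage, V

# Single-stage/OTA approximation (assumption): GBW = gm / (2*pi*C_L)
gm = 2 * math.pi * ugbw_hz * c_load
print(f"Implied input-stage gm ~ {gm * 1e6:.0f} uA/V")   # about 201 uA/V

# Total supply current implied by the power figure
i_total = power_w / vdd
print(f"Total supply current ~ {i_total * 1e6:.0f} uA")  # about 167 uA
```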
52.
Conceptual Design of a Uyghur Frame-Semantic Knowledge Base (cited 1 time: 1 self-citation, 0 by others)
This paper makes a preliminary exploration of a frame-semantic description system and its content for Uyghur, and establishes a tree structure for Uyghur frame-semantic documents. Based on the content to be described in the Uyghur frame-semantic knowledge base and the characteristics of the frame-semantic network itself, information is stored in a database with Uyghur semantic frames at the core, and a conceptual model of the semantic knowledge base is designed. This work explores a feasible technical route, method, and approach for building a cognition-based Uyghur frame-semantic knowledge base.
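As an illustration of what such a frame-centred conceptual model can look like in code, the sketch below defines frames, frame elements, and lexical units, with frame-to-frame relations forming the tree/network structure the abstract mentions. The class and field names are assumptions modelled on FrameNet-style knowledge bases, not the schema actually used in the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FrameElement:
    """A participant/role within a semantic frame (e.g. Agent, Theme)."""
    name: str
    core: bool = True          # core vs. peripheral element
    description: str = ""

@dataclass
class LexicalUnit:
    """A Uyghur word or phrase that evokes the frame."""
    lemma: str
    pos: str                   # part of speech
    example_sentences: List[str] = field(default_factory=list)

@dataclass
class Frame:
    """A semantic frame: the central entity of the knowledge base."""
    name: str
    definition: str
    elements: List[FrameElement] = field(default_factory=list)
    lexical_units: List[LexicalUnit] = field(default_factory=list)
    # Frame-to-frame relations (inheritance, using, subframe, ...) forming the tree structure
    relations: Dict[str, List[str]] = field(default_factory=dict)

# Illustrative usage with a hypothetical frame
motion = Frame(
    name="Motion",
    definition="An entity changes location.",
    elements=[FrameElement("Theme"), FrameElement("Goal", core=False)],
    relations={"inherited_by": ["Self_motion"]},
)
print(motion.name, [e.name for e in motion.elements])
```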
53.
To address the problems that existing trusted computing platform attestation techniques are static, one-shot, and not semantics-based, an abstract model for semantic remote attestation is proposed. The model consists of a code characterization component, a security monitoring component, a security control component, an application interface component, and a knowledge base. For the key issue of the knowledge base, rules are created offline in a learning mode and then used for online detection.
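The sketch below is one hedged way to mock up the five-component model: offline rule learning fills a knowledge base, and the online path characterizes code, checks the extracted features against the rules, and lets a control component react. All component, function, and rule names are illustrative assumptions, not the paper's actual interfaces.

```python
from typing import Callable, Dict, List

# Knowledge base: rules learned offline, then consulted during online detection.
# A rule maps a name to a predicate over extracted code features (assumption).
KnowledgeBase = Dict[str, Callable[[Dict[str, float]], bool]]

def learn_rules_offline(training_samples: List[Dict[str, float]]) -> KnowledgeBase:
    """Offline learning mode: derive simple threshold rules from known-good samples."""
    max_syscall_rate = max(s["syscall_rate"] for s in training_samples)
    return {
        "syscall_rate_within_profile": lambda f: f["syscall_rate"] <= 1.2 * max_syscall_rate,
    }

def characterize_code(binary_path: str) -> Dict[str, float]:
    """Code characterization component: extract semantic features (stubbed here)."""
    return {"syscall_rate": 4.0}   # placeholder feature value

def monitor(features: Dict[str, float], kb: KnowledgeBase) -> List[str]:
    """Security monitoring component: online check of features against the rule base."""
    return [name for name, rule in kb.items() if not rule(features)]

def control(violations: List[str]) -> str:
    """Security control component: decide on a response."""
    return "allow" if not violations else "deny (" + ", ".join(violations) + ")"

# Application interface component: the call sequence an attesting application would use.
kb = learn_rules_offline([{"syscall_rate": 3.0}, {"syscall_rate": 3.5}])
print(control(monitor(characterize_code("/path/to/app"), kb)))
```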
54.
Centered on the operational and management business of small and medium-sized enterprises (SMEs), this paper designs and proposes a talent-training model for the higher vocational computer network technology major that integrates business knowledge. Starting from job positions and taking students' characteristics into account, it formulates a skills-training plan suited to each individual student, and actively explores the training of the network professionals that SMEs need in order to advance their informatization.
55.
Recently, Aceto, Fokkink and Ingólfsdóttir proposed an algorithm to turn any sound and ground-complete axiomatisation of any preorder listed in the linear time-branching time spectrum at least as coarse as the ready simulation preorder, into a sound and ground-complete axiomatisation of the corresponding equivalence (its kernel). Moreover, if the former axiomatisation is ω-complete, so is the latter. Subsequently, de Frutos Escrig, Gregorio Rodríguez and Palomino generalised this result, so that the algorithm is applicable to any preorder at least as coarse as the ready simulation preorder, provided it is initials preserving. The current paper shows that the same algorithm applies equally well to weak semantics: the proviso of initials preserving can be replaced by other conditions, such as weak initials preserving and satisfying the second τ-law. This makes it applicable to all 87 preorders surveyed in "the linear time-branching time spectrum II" that are at least as coarse as the ready simulation preorder. We also extend the scope of the algorithm to infinite processes, by adding recursion constants. As an application of both extensions, we provide a ground-complete axiomatisation of the CSP failures equivalence for BCCS processes with divergence.
56.
One of the main problems in operational risk management is the lack of loss data, which affects the parameter estimates of the marginal distributions of the losses. The principal reason is that financial institutions only started to collect operational loss data a few years ago, due to the relatively recent definition of this type of risk. Considering this drawback, the employment of Bayesian methods and simulation tools could be a natural solution to the problem. The use of Bayesian methods allows us to integrate the scarce and, sometimes, inaccurate quantitative data collected by the bank with prior information provided by experts. An original proposal is a Bayesian approach for modelling operational risk and for calculating the capital required to cover the estimated risks. Besides this methodological innovation, a computational scheme based on Markov chain Monte Carlo simulations is required. In particular, the application of the MCMC method to estimate the parameters of the marginals shows advantages in terms of a reduction of capital charge according to different choices of the marginal loss distributions.
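A minimal sketch of the kind of Bayesian/MCMC computation the abstract describes, assuming lognormal loss severities, a Poisson loss frequency and simple weakly informative priors: a random-walk Metropolis sampler estimates the severity parameters from scarce loss data, and the posterior draws feed a simulation of the annual-loss quantile used as a capital charge. The distributional choices, priors, frequency rate and the 99.9% quantile are assumptions for illustration, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scarce historical operational losses (illustrative data, monetary units)
losses = np.array([12_000, 45_000, 8_500, 150_000, 23_000, 9_800, 60_000])
log_losses = np.log(losses)

def log_posterior(mu, sigma):
    """Lognormal likelihood with assumed priors: mu ~ Normal(10, 3), sigma ~ HalfNormal(2)."""
    if sigma <= 0:
        return -np.inf
    loglik = np.sum(-np.log(sigma) - 0.5 * ((log_losses - mu) / sigma) ** 2)
    logprior = -0.5 * ((mu - 10.0) / 3.0) ** 2 - 0.5 * (sigma / 2.0) ** 2
    return loglik + logprior

# Random-walk Metropolis over (mu, sigma)
n_iter, step = 20_000, 0.25
chain = np.empty((n_iter, 2))
mu, sigma = 10.0, 1.0
lp = log_posterior(mu, sigma)
for i in range(n_iter):
    mu_p, sigma_p = mu + step * rng.normal(), sigma + step * rng.normal()
    lp_p = log_posterior(mu_p, sigma_p)
    if np.log(rng.uniform()) < lp_p - lp:
        mu, sigma, lp = mu_p, sigma_p, lp_p
    chain[i] = (mu, sigma)
posterior = chain[n_iter // 2:]          # discard burn-in

# Capital charge: 99.9% quantile of simulated annual losses, frequency ~ Poisson(10) (assumed)
annual = []
for mu_d, sigma_d in posterior[rng.integers(0, len(posterior), 5_000)]:
    n = rng.poisson(10)
    annual.append(rng.lognormal(mu_d, sigma_d, n).sum() if n else 0.0)
capital = np.quantile(annual, 0.999)
print(f"Posterior mean mu={posterior[:, 0].mean():.2f}, sigma={posterior[:, 1].mean():.2f}")
print(f"Estimated capital charge (99.9% quantile): {capital:,.0f}")
```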
57.
Evaluation of Localized Semantics: Data, Methodology, and Experiments (cited 1 time: 0 self-citations, 1 by others)
We present a new data set of 1014 images with manual segmentations and semantic labels for each segment, together with a methodology for using this kind of data for recognition evaluation. The images and segmentations are from the UCB segmentation benchmark database (Martin et al., in International conference on computer vision, vol. II, pp. 416–421, 2001). The database is extended by manually labeling each segment with its most specific semantic concept in WordNet (Miller et al., in Int. J. Lexicogr. 3(4):235–244, 1990). The evaluation methodology establishes protocols for mapping algorithm specific localization (e.g., segmentations) to our data, handling synonyms, scoring matches at different levels of specificity, dealing with vocabularies with sense ambiguity (the usual case), and handling ground truth regions with multiple labels. Given these protocols, we develop two evaluation approaches. The first measures the range of semantics that an algorithm can recognize, and the second measures the frequency that an algorithm recognizes semantics correctly. The data, the image labeling tool, and programs implementing our evaluation strategy are all available on-line (kobus.ca//research/data/IJCV_2007). We apply this infrastructure to evaluate four algorithms which learn to label image regions from weakly labeled data. The algorithms tested include two variants of multiple instance learning (MIL), and two generative multi-modal mixture models. These experiments are on a significantly larger scale than previously reported, especially in the case of MIL methods. More specifically, we used training data sets up to 37,000 images and training vocabularies of up to 650 words. We found that one of the mixture models performed best on image annotation and the frequency correct measure, and that variants of MIL gave the best semantic range performance. We were able to substantively improve the performance of MIL methods on the other tasks (image annotation and frequency correct region labeling) by providing an appropriate prior.
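The sketch below illustrates one plausible way to score a predicted region label against a ground-truth WordNet concept at different levels of specificity: an exact synset match scores 1, related concepts get partial credit via path similarity, and words are mapped to their candidate synsets to cope with synonymy and sense ambiguity. It uses NLTK's WordNet interface and is only a hedged reconstruction of the idea, not the evaluation protocol distributed with the paper.

```python
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def score_label(predicted_word: str, truth_synset_name: str) -> float:
    """Score a predicted word against a ground-truth synset.

    Handles synonymy and sense ambiguity by taking the best-scoring sense of the
    predicted word (an assumption; other disambiguation protocols are possible).
    """
    truth = wn.synset(truth_synset_name)
    candidates = wn.synsets(predicted_word, pos=wn.NOUN)
    if not candidates:
        return 0.0
    best = 0.0
    for sense in candidates:
        if sense == truth:
            score = 1.0                                    # exact concept match
        else:
            score = sense.path_similarity(truth) or 0.0    # partial credit by taxonomy distance
        best = max(best, score)
    return best

# Example: a ground-truth region labeled with the 'dog' concept
print(score_label("dog", "dog.n.01"))     # 1.0: synonym/exact match
print(score_label("canine", "dog.n.01"))  # partial credit (a close hypernym sense)
print(score_label("car", "dog.n.01"))     # small value: semantically distant
```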
58.
In this paper, consistency is understood in the standard way, i.e. as the absence of a contradiction. The basic constructive logic BKc4, which is adequate to this sense of consistency in the ternary relational semantics without a set of designated points, is defined. Then, it is shown how to define a series of logics by extending BKc4 up to minimal intuitionistic logic. All logics defined in this paper are paraconsistent logics.
59.
Category Partition Method (CPM) is a general approach to specification-based program testing, where test frame reduction and refinement are two important issues. Test frame reduction is necessary since too many test frames may be produced, and test frame refinement is important since during CPM testing new information about test frame generation may be obtained and incorporated incrementally. Besides the information provided by testers or users, implementation-related knowledge offers alternative information for reducing and refining CPM test frames. This paper explores the idea by proposing a test frame updating method for Prolog programs based on a call patterns semantics, in which a call patterns analysis is used to collect information about the way in which procedures are used in a program. The updated test frames are represented as constraints. The effect of our test frame updating is two-fold. On one hand, it removes "uncared" data from the original set of test frames; on the other hand, it refines the test frames to which we should pay more attention. The first effect makes the input domain on which a procedure must be tested a subset of the procedure's input domain, and the latter gives testers a better chance of finding the faults that are more likely to show up when the program under consideration is actually used. Our test frame updating method preserves the effectiveness of CPM testing with respect to the detection of the faults we care about. The test case generation from the updated set of test frames is also discussed. In order to show the applicability of our method, an approximation call patterns semantics is proposed, and the test frame updating on this semantics is illustrated by an example.
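As a rough illustration of the updating idea, the sketch below represents CPM test frames as category-to-choice mappings and call-pattern information as constraints (predicates over frames): frames whose data the analysed call patterns show to be "uncared" are removed, and the surviving frames are the ones to refine and test. The frame contents, constraint forms, and target procedure are illustrative assumptions, not the paper's actual analysis of Prolog call patterns.

```python
from typing import Callable, Dict, List, Tuple

TestFrame = Dict[str, str]                    # category -> chosen value
Constraint = Callable[[TestFrame], bool]      # derived from the call patterns analysis (assumption)

# Original CPM test frames for a hypothetical list-sorting procedure
frames: List[TestFrame] = [
    {"list": "empty",     "elements": "integers"},
    {"list": "non-empty", "elements": "integers"},
    {"list": "non-empty", "elements": "unbound vars"},
    {"list": "partial",   "elements": "integers"},
]

# Constraints extracted from the call patterns semantics: in this program the
# procedure is only ever called with a fully instantiated list of integers.
call_pattern_constraints: List[Constraint] = [
    lambda f: f["list"] != "partial",
    lambda f: f["elements"] == "integers",
]

def update_frames(frames: List[TestFrame],
                  constraints: List[Constraint]) -> Tuple[List[TestFrame], List[TestFrame]]:
    """Remove 'uncared' frames and keep the rest for refinement and test case generation."""
    kept = [f for f in frames if all(c(f) for c in constraints)]
    dropped = [f for f in frames if f not in kept]
    return kept, dropped

kept, dropped = update_frames(frames, call_pattern_constraints)
print("Frames to test (refined):", kept)       # test cases are then generated from these
print("Frames removed as uncared:", dropped)
```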
60.
There is considerable interest in the potential for using operational research (O.R.) in developing countries. One sign of this is the formation of new societies for O.R. scientists in countries and regions where no such society had existed. Since 2003, such societies have been formed in several parts of Africa. This paper focuses on West Africa, and presents a bibliography of papers relating to applications of O.R. in the nations of this part of the continent. The paper describes the way in which the bibliography was collated and discusses the overall picture that the list of papers presents of the state of O.R. in the 18 countries that are considered.